221 research outputs found

    Some matrix nearness problems suggested by Tikhonov regularization

    The numerical solution of linear discrete ill-posed problems typically requires regularization, i.e., replacement of the available ill-conditioned problem by a nearby better-conditioned one. The most popular regularization methods for problems of small to moderate size are Tikhonov regularization and truncated singular value decomposition (TSVD). By considering matrix nearness problems related to Tikhonov regularization, several novel regularization methods are derived. These methods share properties with both Tikhonov regularization and TSVD, and can give approximate solutions of higher quality than either of these methods.
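
    As a point of reference for the comparison drawn in the abstract, the sketch below applies the two classical baselines, Tikhonov regularization and TSVD, to a small ill-conditioned test problem. The Hilbert-matrix example, the regularization parameter, and the truncation index are illustrative assumptions, not taken from the paper, and the paper's new matrix-nearness-based methods are not reproduced here.

```python
# Illustrative comparison of the two classical baselines; problem data are assumptions.
import numpy as np
from scipy.linalg import hilbert

n = 12
A = hilbert(n)                        # severely ill-conditioned test matrix
x_true = np.ones(n)
b = A @ x_true

# Tikhonov: minimize ||A x - b||^2 + mu * ||x||^2
mu = 1e-8
x_tik = np.linalg.solve(A.T @ A + mu * np.eye(n), A.T @ b)

# TSVD: keep only the k largest singular triplets
U, s, Vt = np.linalg.svd(A)
k = 6
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

print(np.linalg.norm(x_tik - x_true), np.linalg.norm(x_tsvd - x_true))
```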

    Fractional regularization matrices for linear discrete ill-posed problems

    The numerical solution of linear discrete ill-posed problems typically requires regularization. Two of the most popular regularization methods are due to Tikhonov and Lavrentiev. These methods require the choice of a regularization matrix. Common choices include the identity matrix and finite difference approximations of a derivative operator. It is the purpose of the present paper to explore the use of fractional powers of the matrices A^T A (for Tikhonov regularization) and A (for Lavrentiev regularization) as regularization matrices, where A is the matrix that defines the linear discrete ill-posed problem. Both small- and large-scale problems are considered. © 2013 Springer Science+Business Media Dordrecht.
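
    A minimal sketch of using a fractional matrix power as a Tikhonov regularization matrix, under the assumption (see the reconstruction above) that the relevant matrix is A^T A; the exponent, the regularization parameter, and the Hilbert-matrix test problem are illustrative choices, not values from the paper.

```python
# Sketch under the assumption L = (A^T A)^alpha; alpha, mu and the test matrix are illustrative.
import numpy as np
from scipy.linalg import hilbert

n = 8
A = hilbert(n)
b = A @ np.ones(n)

alpha = 0.25
w, V = np.linalg.eigh(A.T @ A)             # A^T A is symmetric positive semidefinite
w = np.clip(w, 0.0, None)                  # guard against tiny negative rounding errors
L = V @ np.diag(w**alpha) @ V.T            # fractional power (A^T A)^alpha

# Tikhonov with regularization matrix L: (A^T A + mu L^T L) x = A^T b
mu = 1e-4
x = np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ b)
print(np.linalg.norm(x - np.ones(n)))
```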

    On the generation of Krylov subspace bases

    Many problems in scientific computing involving a large sparse matrix A are solved by Krylov subspace methods. This includes methods for the solution of large linear systems of equations with A, for the computation of a few eigenvalues and associated eigenvectors of A, and for the approximation of nonlinear matrix functions of A. When the matrix A is non-Hermitian, the Arnoldi process commonly is used to compute an orthonormal basis of a Krylov subspace associated with A. The Arnoldi process often is implemented with the aid of the modified Gram-Schmidt method. It is well known that the latter constitutes a bottleneck in parallel computing environments, and to some extent also on sequential computers. Several approaches to circumvent orthogonalization by the modified Gram-Schmidt method have been described in the literature, including the generation of Krylov subspace bases with the aid of suitably chosen Chebyshev or Newton polynomials. We review these schemes and describe new ones. Numerical examples are presented.
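
    For concreteness, here is a minimal Arnoldi implementation with modified Gram-Schmidt orthogonalization, the standard scheme whose orthogonalization cost motivates the Chebyshev- and Newton-polynomial bases reviewed in the paper; those alternative bases are not sketched here, and the random test matrix is an assumption.

```python
# Minimal Arnoldi process with modified Gram-Schmidt; test data are illustrative.
import numpy as np

def arnoldi(A, v, m):
    """Orthonormal basis V of the Krylov subspace K_m(A, v) and Hessenberg matrix H."""
    n = v.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt sweep
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] == 0.0:              # invariant subspace found; stop early
            return V[:, :j + 1], H[:j + 2, :j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
V, H = arnoldi(A, rng.standard_normal(100), 20)
print(np.linalg.norm(V[:, :20].T @ V[:, :20] - np.eye(20)))   # orthonormality check
```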

    A new representation of generalized averaged Gauss quadrature rules

    Gauss quadrature rules associated with a nonnegative measure with support on (part of) the real axis find many applications in Scientific Computing. It is important to be able to estimate the quadrature error when replacing an integral by an l-node Gauss quadrature rule in order to choose a suitable number of nodes. A classical approach to estimate this error is to evaluate the associated (2l + 1)-node Gauss-Kronrod rule. However, Gauss-Kronrod rules with 2l + 1 real nodes might not exist. The (2l + 1)-node generalized averaged Gauss formula associated with the l-node Gauss rule described in Spalevic (2007) [16] is guaranteed to exist and provides an attractive alternative to the (2l + 1)-node Gauss-Kronrod rule. This paper describes a new representation of generalized averaged Gauss formulas that is cheaper to evaluate than the available representation.
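
    The error-estimation idea can be illustrated as follows; a (2l + 1)-node Gauss-Legendre rule stands in for the averaged and Gauss-Kronrod companion rules discussed in the paper (their construction needs the recurrence coefficients of the measure), so this is a sketch of the principle only, not of the paper's new representation.

```python
# Error of an l-node Gauss rule estimated by a higher-order companion rule.
# The companion used here is plain Gauss-Legendre, an illustrative stand-in.
import numpy as np

f = np.exp                                             # illustrative integrand on [-1, 1]
l = 4
x1, w1 = np.polynomial.legendre.leggauss(l)            # l-node Gauss rule
x2, w2 = np.polynomial.legendre.leggauss(2 * l + 1)    # (2l + 1)-node companion rule

G_l = w1 @ f(x1)
err_estimate = abs(w2 @ f(x2) - G_l)
err_true = abs((np.e - 1.0 / np.e) - G_l)              # exact integral of exp over [-1, 1]
print(err_estimate, err_true)
```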

    Simple Square Smoothing Regularization Operators

    Tikhonov regularization of linear discrete ill-posed problems often is applied with a finite difference regularization operator that approximates a low-order derivative. These operators generally are represented by a banded rectangular matrix with fewer rows than columns. They therefore cannot be applied in iterative methods that are based on the Arnoldi process, which requires the regularization operator to be represented by a square matrix. This paper discusses two approaches to circumvent this difficulty: zero-padding the rectangular matrix to make it square, and extending the rectangular matrix to a square circulant. We also describe how to combine these operators by weighted averaging and with orthogonal projection. Applications to Arnoldi and Lanczos bidiagonalization-based Tikhonov regularization, as well as to truncated iteration with a range-restricted minimal residual method, are presented.
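
    A minimal sketch of the two squaring strategies named in the abstract, applied to the standard rectangular first-difference operator; the particular padding row and circulant stencil shown here are illustrative guesses rather than the paper's exact constructions.

```python
# Two ways to turn the (n-1) x n first-difference operator into a square matrix.
import numpy as np

n = 6
L1 = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)     # rectangular first-difference matrix

# Zero-padding: append a zero row so the operator becomes square.
L_pad = np.vstack([L1, np.zeros((1, n))])

# Circulant extension: wrap the difference stencil around periodically.
L_circ = np.eye(n, k=1) - np.eye(n)
L_circ[-1, 0] = 1.0

print(L_pad.shape, L_circ.shape)                  # both are n x n
```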

    Augmented Implicitly Restarted Lanczos Bidiagonalization Methods

    New restarted Lanczos bidiagonalization methods for the computation of a few of the largest or smallest singular values of a large matrix are presented. Restarting is carried out by augmentation of Krylov subspaces that arise naturally in the standard Lanczos bidiagonalization method. The augmenting vectors are associated with certain Ritz or harmonic Ritz vectors. Computed examples show the new methods to be competitive with available schemes.
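
    For background, the sketch below runs plain (unrestarted, unaugmented) Golub-Kahan/Lanczos bidiagonalization, the process whose Krylov subspaces the paper augments and restarts; the augmentation with Ritz or harmonic Ritz vectors is not implemented, reorthogonalization is omitted, and the random test matrix is an assumption.

```python
# Plain Golub-Kahan (Lanczos) bidiagonalization; no restarting, no augmentation,
# no reorthogonalization. Test data are illustrative.
import numpy as np

def gk_bidiag(A, b, k):
    """k steps of Golub-Kahan bidiagonalization of A started from the vector b."""
    m, n = A.shape
    U = np.zeros((m, k + 1)); V = np.zeros((n, k))
    alpha = np.zeros(k); beta = np.zeros(k + 1)
    beta[0] = np.linalg.norm(b)
    U[:, 0] = b / beta[0]
    for j in range(k):
        r = A.T @ U[:, j] - (beta[j] * V[:, j - 1] if j > 0 else 0.0)
        alpha[j] = np.linalg.norm(r)
        V[:, j] = r / alpha[j]
        p = A @ V[:, j] - alpha[j] * U[:, j]
        beta[j + 1] = np.linalg.norm(p)
        U[:, j + 1] = p / beta[j + 1]
    # k x k lower bidiagonal matrix: alpha on the diagonal, beta on the subdiagonal
    B = np.diag(alpha) + np.diag(beta[1:k], -1)
    return U, V, B

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
U, V, B = gk_bidiag(A, rng.standard_normal(200), 10)
# The largest singular values of B approximate the largest singular values of A.
print(np.linalg.svd(B, compute_uv=False)[:3])
print(np.linalg.svd(A, compute_uv=False)[:3])
```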

    Rational Averaged Gauss Quadrature Rules

    It is important to be able to estimate the quadrature error in Gauss rules. Several approaches have been developed, including the evaluation of associated Gauss-Kronrod rules (if they exist), or the associated averaged Gauss and generalized averaged Gauss rules. Integrals with certain integrands can be approximated more accurately by rational Gauss rules than by Gauss rules. This paper introduces associated rational averaged Gauss rules and rational generalized averaged Gauss rules, which can be used to estimate the error in rational Gauss rules. Rational Gauss-Kronrod rules are also discussed. Computed examples illustrate the accuracy of the error estimates determined by these quadrature rules.
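
    Restated as a formula (with notation introduced here for illustration): writing I(f) for the integral, G_l(f) for the rational Gauss rule, and Ĝ(f) for an associated companion rule (averaged, generalized averaged, or Gauss-Kronrod), the error estimate takes the form below.

```latex
% Notation introduced for illustration only; it is not taken from the paper.
\[
  I(f) - G_\ell(f) \;\approx\; \widehat{G}(f) - G_\ell(f),
\]
% so the computable difference on the right serves as an estimate of the
% quadrature error of the rational Gauss rule G_\ell.
```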